December 15, 2023

Science Committee Leaders Stress Importance of Diligence in NIST AI Safety Research Funding

(WASHINGTON, DC) – Yesterday, House Science, Space, and Technology Committee Ranking Member Zoe Lofgren (D-CA) and Chairman Frank Lucas (R-OK) sent a letter to National Institute of Standards and Technology (NIST) Director Laurie Locascio regarding the establishment of the Artificial Intelligence Safety Institute (AISI) at NIST. Subcommittee on Research and Technology Ranking Member Haley Stevens (D-MI) and Chairman Mike Collins (R-GA), along with Subcommittee on Investigations and Oversight Ranking Member Valerie Foushee (D-NC) and Chairman Jay Obernolte (R-CA), also signed the letter.

The AISI, established by President Biden’s October 30 Artificial Intelligence Executive Order, is tasked with developing technical guidance to address the risks of AI. The AISI will also fund extramural research on AI safety issues. In the letter, the Members stress the need for NIST to award research funds transparently and to promote scientific and methodological quality among its AI safety research partners. The Members also request a briefing from NIST to discuss the AISI’s processes and its use of funds.

“Unfortunately, the current state of the AI safety research field creates challenges for NIST as it navigates its leadership role on the issue,” said the Members in the letter. “Findings within the community are often self-referential and lack the quality that comes from revision in response to critiques by subject matter experts. There is also significant disagreement within the AI safety field over scope, taxonomies, and definitions.”

The Members continued, “Members of the House Committee on Science, Space, and Technology have long supported NIST’s important work to advance trustworthiness in AI systems, including through the National Artificial Intelligence Initiative Act of 2020 and the CHIPS and Science legislation. We believe this work should not be rushed at the expense of doing it right. Developing novel evaluation suites complete with appropriate metrics for AI trustworthiness across successive generations of large language models could itself take years – without taking into account how these AI systems are deployed across sectors and use cases. As NIST prepares to fund extramural research on AI safety, scientific merit and transparency must remain paramount considerations. In implementing the AISI, we expect NIST to hold the recipients of federal research funding for AI safety research to the same rigorous guidelines of scientific and methodological quality that characterize the broader federal research enterprise.”

A copy of the letter can be found here.

###